YouTube videos: Neural Network Vulnerabilities
 
        Deep Learning's Most Dangerous Vulnerability: Adversarial Attacks at Silicon Valley Code Camp 2019
 
        Visualizing the Impact of Adversarial Attacks on Perception in Convolutional Neural Networks
 
        32 - Neural Network Assisted Fuzzing - Discovering Software Vulnerabilities Using Deep Learning
 
        Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data
 
        Graph Neural Network-based Vulnerability Prediction
 
        Evasion Attacks on Neural Networks
 
        Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
 
        USENIX Security '24 - Hijacking Attacks against Neural Networks by Analyzing Training Data
 
        Security Vulnerabilities in Machine Learning
 
        Adversarial Attacks on AI: Impact and Defenses
 
        Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
 
        LineVD: Statement-level Vulnerability Detection using Graph Neural Networks
 
        What Are Adversarial Attacks On CNNs? - AI and Machine Learning Explained
 
        Deep Learning Based Vulnerability Detection: Are We There Yet?
 
        Model Stealing Attacks Against Inductive Graph Neural Networks
 
        VCodeDet: a Graph Neural Network for Source Code Vulnerability Detection
 
        Adversarial Attacks on Neural Networks - Bug or Feature?
 
        USENIX Security '22 - Inference Attacks Against Graph Neural Networks
 
        Why Are Deep Learning Models Vulnerable To Adversarial Attacks? - Tech Terms Explained